
Stirling number


Chromatic Feature Vectors for 2-Trees: Exact Formulas for Partition Enumeration with Network Applications

Allagan, J., Morgan, G., Langley, S., Lopez-Bonilla, R., Deriglazov, V.

arXiv.org Artificial Intelligence

We establish closed-form enumeration formulas for chromatic feature vectors of 2-trees under the bichromatic triangle constraint. These efficiently computable structural features derive from constrained graph colorings in which every triangle uses exactly two colors, forbidding both monochromatic and rainbow triangles; the constraint arises in distributed systems whose components must avoid both complete concentration and complete isolation. For theta graphs Theta_n, we prove r_k(Theta_n) = S(n-2, k-1) for k >= 3 (Stirling numbers of the second kind) and r_2(Theta_n) = 2^(n-2) + 1, computable in O(n) time. For fan graphs Phi_n, we establish r_2(Phi_n) = F_{n+1} (Fibonacci numbers) and derive explicit formulas r_k(Phi_n) = sum_{t=k-1}^{n-1} a_{n-1,t} * S(t, k-1) with efficiently computable binomial coefficients, achieving O(n^2) computation per component. Unlike classical chromatic polynomials, which assign identical features to all n-vertex 2-trees, bichromatic constraints yield informative structural features. While not complete graph invariants, these features capture meaningful structural properties through connections to Fibonacci polynomials, Bell numbers, and independent-set enumeration. Applications include Byzantine fault tolerance in hierarchical networks, VM allocation in cloud computing, and secret-sharing protocols in distributed cryptography.
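The closed-form expressions quoted above can be evaluated directly. A minimal sketch, using the standard recurrence for Stirling numbers of the second kind and the convention F_1 = F_2 = 1 for Fibonacci numbers; the function names are illustrative, not from the paper:

```python
# Illustrative evaluation of the abstract's stated formulas for theta and
# fan graphs; names are hypothetical, not taken from the paper.

def stirling2(n, k):
    """Stirling number of the second kind S(n, k) via the standard recurrence."""
    if n == 0 and k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def r_theta(n, k):
    """r_k(Theta_n): S(n-2, k-1) for k >= 3, and 2^(n-2) + 1 for k = 2."""
    if k == 2:
        return 2 ** (n - 2) + 1
    return stirling2(n - 2, k - 1)

def fib(n):
    """Fibonacci numbers with F_1 = F_2 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def r2_fan(n):
    """r_2(Phi_n) = F_{n+1}, per the stated fan-graph result."""
    return fib(n + 1)
```

For the small parameters one would check by hand, both routines stay well within the O(n) and O(n^2) bounds quoted above.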


Does the Barron space really defy the curse of dimensionality?

Schavemaker, Olov

arXiv.org Machine Learning

The Barron space has become famous in the theory of (shallow) neural networks because it seemingly defies the curse of dimensionality. While the Barron space and its generalizations indeed defy the curse of dimensionality from the point of view of classical smoothness, we herein provide evidence that they do not defy it with respect to a nonclassical notion of smoothness that relates naturally to "infinitely wide" shallow neural networks. Just as the Bessel potential spaces are defined via the Fourier transform, we define so-called ADZ spaces via the Mellin transform; these ADZ spaces encapsulate the nonclassical smoothness alluded to above. (38 pages; to appear in the author's dissertation.)
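For reference, the analogy can be made concrete with two standard definitions (classical results, not the paper's ADZ norm itself): the Bessel potential norm is a Fourier-side smoothness weight, and the Mellin transform is the multiplicative analogue of the Fourier transform.

```latex
% Bessel potential space H^{s,p}: smoothness imposed on the Fourier side
\| f \|_{H^{s,p}} \;=\; \bigl\| \mathcal{F}^{-1}\!\bigl[ (1+|\xi|^2)^{s/2} \, \mathcal{F} f \bigr] \bigr\|_{L^p}

% Mellin transform of f : (0,\infty) \to \mathbb{C}
\mathcal{M}[f](s) \;=\; \int_0^{\infty} x^{s-1} f(x) \, dx
```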


Recommendation from Raw Data with Adaptive Compound Poisson Factorization

Gouvert, Olivier, Oberlin, Thomas, Févotte, Cédric

arXiv.org Machine Learning

Count data are often used in recommender systems: they are widespread (song play counts, product purchases, clicks on web pages) and can reveal user preferences without any explicit rating from the user. Such data are known to be sparse, over-dispersed and bursty, which makes their direct use in recommender systems challenging and often leads to pre-processing steps such as binarization. The aim of this paper is to build recommender systems from these raw data, by means of the recently proposed compound Poisson Factorization (cPF). The paper's contributions are threefold: we present a unified framework for discrete data (dcPF), leading to an adaptive and scalable algorithm; we show that our framework achieves a trade-off between Poisson Factorization (PF) applied to raw and to binarized data; and we study four specific instances that are relevant to recommendation and exhibit new links with combinatorics. Experiments on three different datasets show that dcPF effectively adjusts to over-dispersion, leading to better recommendation scores than PF on either raw or binarized data.
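The over-dispersion and burstiness that motivate the compound construction are easy to see in simulation. The sketch below draws compound Poisson counts with a geometric "burst size" distribution; this distributional choice is an illustrative assumption for exposition, not the paper's dcPF model.

```python
# Illustration: compound Poisson counts are over-dispersed (variance > mean),
# unlike plain Poisson counts. Distribution choices are illustrative only.
import random

def poisson(lam, rng):
    """Poisson draw via Knuth's multiplicative method (fine for small lam)."""
    L = pow(2.718281828459045, -lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def compound_poisson(lam, p_geo, rng):
    """Sum of Poisson(lam)-many iid geometric(p_geo) bursts (support 1, 2, ...)."""
    total = 0
    for _ in range(poisson(lam, rng)):
        x = 1
        while rng.random() > p_geo:
            x += 1
        total += x
    return total

rng = random.Random(0)
samples = [compound_poisson(1.5, 0.3, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# For a compound Poisson with non-degenerate bursts, var exceeds mean,
# which is exactly the over-dispersion that plain PF struggles to fit.
```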


What is the distribution of the number of unique original items in a bootstrap sample?

Mendelson, Alex F., Zuluaga, Maria A., Hutton, Brian F., Ourselin, Sébastien

arXiv.org Machine Learning

Sampling with replacement occurs in many settings in machine learning, notably in the bagging ensemble technique and the .632+ validation scheme. The number of unique original items in a bootstrap sample can play an important role in the behaviour of prediction models learned on it. Indeed, there are uncontrived examples where duplicate items have no effect. The purpose of this report is to present the distribution of the number of unique original items in a bootstrap sample clearly and concisely, with a view to enabling other machine learning researchers to understand and control this quantity in existing and future resampling techniques. We describe the key characteristics of this distribution along with the generalisation to the case where items come from distinct categories, as in classification. In both cases we discuss the normal limit, and conduct an empirical investigation to derive a heuristic for when a normal approximation is permissible.
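The distribution in question is the classical occupancy distribution: drawing n times with replacement from n items, P(U = u) = C(n, u) * u! * S(n, u) / n^n, and E[U] = n * (1 - (1 - 1/n)^n), which tends to n * (1 - 1/e) ≈ 0.632 n (the ".632" in .632+ validation). These are standard results, not the report's own notation; a small sketch checking them:

```python
# Standard occupancy-distribution facts for a size-n bootstrap sample drawn
# from n items; classical results, not quoted from the report.
import random
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == 0 and k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def pmf_unique(n, u):
    """P(U = u) = C(n, u) * u! * S(n, u) / n^n: u occupied cells, onto map."""
    return comb(n, u) * factorial(u) * stirling2(n, u) / n ** n

def expected_unique(n):
    """E[U] = n * (1 - (1 - 1/n)^n), approaching n * (1 - 1/e)."""
    return n * (1 - (1 - 1 / n) ** n)

# Monte Carlo check of the expectation for n = 100.
rng = random.Random(42)
n, trials = 100, 5000
empirical = sum(len({rng.randrange(n) for _ in range(n)})
                for _ in range(trials)) / trials
```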